Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update

Related Vulnerabilities: CVE-2022-0670  

Synopsis

Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update

Type/Severity

Security Advisory: Moderate

Topic

An update is now available for Red Hat Ceph Storage 5.2.

Red Hat Product Security has rated this update as having a security impact of Moderate. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) in the References section.

Description

Red Hat Ceph Storage is a scalable, open, software-defined storage platform that combines the most stable version of the Ceph storage system with a Ceph management platform, deployment utilities, and support services.

The ceph-ansible package provides Ansible playbooks for installing, maintaining, and upgrading Red Hat Ceph Storage.

Perf Tools is a collection of performance analysis tools, including a high-performance, multi-threaded malloc() implementation that works particularly well with threads and STL, a thread-friendly heap checker, a heap profiler, and a CPU profiler.
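
As a brief illustration of how the bundled CPU profiler is typically used from C (a minimal sketch, assuming the gperftools development headers are installed and the program is linked with -lprofiler; the file name busy_work.prof and the busy_work function are arbitrary examples, not part of this advisory):

    /* Minimal gperftools CPU-profiler sketch.
     * Build (assumption): gcc cpu_profile_example.c -lprofiler -o cpu_profile_example */
    #include <gperftools/profiler.h>
    #include <stdio.h>

    static double busy_work(long n)
    {
        double total = 0.0;
        for (long i = 1; i <= n; i++)
            total += 1.0 / (double)i;      /* give the profiler something to sample */
        return total;
    }

    int main(void)
    {
        ProfilerStart("busy_work.prof");   /* start writing CPU samples to this file */
        double result = busy_work(50000000L);
        ProfilerStop();                    /* flush and close the profile */
        printf("result = %f\n", result);
        return 0;
    }

The resulting profile can then be examined with the pprof tool shipped with gperftools; the tcmalloc allocator itself can usually be picked up by linking with -ltcmalloc or preloading the library, without code changes.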

The libunwind packages contain a C API to determine the call chain of a program. This API is necessary for compatibility with Google Performance Tools (gperftools).
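
For illustration, a minimal sketch of using that API to walk the current call chain (assuming the libunwind development headers are installed and the program is linked with -lunwind; the output format is arbitrary):

    /* Minimal libunwind call-chain sketch.
     * Build (assumption): gcc call_chain_example.c -lunwind -o call_chain_example */
    #define UNW_LOCAL_ONLY              /* only unwind within the current process */
    #include <libunwind.h>
    #include <stdio.h>

    static void print_call_chain(void)
    {
        unw_context_t context;
        unw_cursor_t cursor;
        unw_word_t ip, offset;
        char name[256];

        unw_getcontext(&context);           /* capture the current register state */
        unw_init_local(&cursor, &context);  /* set up a cursor for local unwinding */

        while (unw_step(&cursor) > 0) {     /* walk up the call chain frame by frame */
            unw_get_reg(&cursor, UNW_REG_IP, &ip);
            if (unw_get_proc_name(&cursor, name, sizeof(name), &offset) == 0)
                printf("0x%lx: %s+0x%lx\n", (unsigned long)ip, name, (unsigned long)offset);
            else
                printf("0x%lx: <unknown>\n", (unsigned long)ip);
        }
    }

    int main(void)
    {
        print_call_chain();
        return 0;
    }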

NFS-Ganesha (nfs-ganesha) is an NFS server that runs in user space. It comes with various back-end modules (called FSALs), provided as shared objects, to support different file systems and namespaces.
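
As a rough illustration of the shared-object plug-in pattern that such back-end modules follow, the generic dlopen() sketch below shows how a process can load and invoke a module at run time; the library name libfsal_example.so and the symbol backend_init are hypothetical placeholders, not the actual FSAL interface:

    /* Generic shared-object plug-in sketch (illustrative only).
     * Build (assumption): gcc plugin_example.c -ldl -o plugin_example */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Load the back-end module; the name is a made-up example. */
        void *handle = dlopen("libfsal_example.so", RTLD_NOW);
        if (!handle) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Look up an entry point exported by the shared object (hypothetical name). */
        int (*backend_init)(void) = (int (*)(void))dlsym(handle, "backend_init");
        if (backend_init)
            backend_init();
        else
            fprintf(stderr, "dlsym failed: %s\n", dlerror());

        dlclose(handle);
        return 0;
    }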

The following packages have been upgraded to a later upstream version: ceph (16.2.8), ceph-ansible (6.0.27.9), cephadm-ansible (1.8.0), gperftools (2.9.1), leveldb (1.23), libunwind (1.5.0), nfs-ganesha (3.5), oath-toolkit (2.6.7). (BZ#1623330, BZ#1942171, BZ#1977888, BZ#1997480, BZ#1997996, BZ#2006214, BZ#2006771, BZ#2013215, BZ#2018906, BZ#2024720, BZ#2028628, BZ#2029307, BZ#2030540, BZ#2039669, BZ#2041563, BZ#2041571, BZ#2042417, BZ#2042602, BZ#2043602, BZ#2047487, BZ#2048681, BZ#2049272, BZ#2053468, BZ#2053591, BZ#2055173, BZ#2057307, BZ#2060278, BZ#2064627, BZ#2077843, BZ#2080242)

Security Fix(es):

  • ceph: user/tenant can obtain access (read/write) to any share (CVE-2022-0670)

For more details about the security issue(s), including the impact, a CVSS score, acknowledgments, and other related information, refer to the CVE page(s) listed in the References section.

Additional Changes:

This update also fixes several bugs and adds various enhancements. Documentation for these changes is available from the Release Notes document linked to in the References section.

Solution

For details on how to apply this update, which includes the changes described in this advisory, refer to:

https://access.redhat.com/articles/11258

Affected Products

  • Red Hat Enterprise Linux for x86_64 9 x86_64
  • Red Hat Enterprise Linux for x86_64 8 x86_64
  • Red Hat Ceph Storage (OSD) 5 for RHEL 9 x86_64
  • Red Hat Ceph Storage (OSD) 5 for RHEL 8 x86_64
  • Red Hat Ceph Storage (MON) 5 for RHEL 9 x86_64
  • Red Hat Ceph Storage (MON) 5 for RHEL 8 x86_64

Fixes

  • BZ - 1623330 - [GSS]ceph.dir.layout layout xattr does not exist for subdirs on non default pool
  • BZ - 1889976 - [5.0] Ceph-Dashboard: Delete Host via dashboard UI is successful but the host list still shows the entry
  • BZ - 1901857 - [RFE] Implement (OpenStack/Keystone) Secure RBAC within RGW
  • BZ - 1910419 - [RFE][RGW] Allow rgw's ops log to be sent to a log file
  • BZ - 1910503 - [GSS] radosgw-admin bi purge returning ERROR: could not init current bucket info for a deleted bucket
  • BZ - 1938670 - [CephFS-NFS] remove misleading stdout messages when creating or deleting NFS cluster
  • BZ - 1939716 - (RFE) (RGW) more control functionality for RGW lifecycle process
  • BZ - 1942171 - [GSS][Mon][Health-Check] mon_pg_warn_max_object_skew needs lower bound implemented
  • BZ - 1962511 - [cephadm] devices with GPT header : Available devices fail to become OSDs
  • BZ - 1962575 - [RGW]: Post manual reshard, LC not processing on versioned bucket
  • BZ - 1966180 - Cephadm needs to support RHEL 9
  • BZ - 1966608 - [cephadm] incorrect information about usage of ceph orch osd rm
  • BZ - 1967901 - [RGW]: Crash on MS secondary when sync resumes post error
  • BZ - 1971694 - dashboard: HEALTH_ERR with OSDs full and nearfull but no (easy) way to guess which ones are
  • BZ - 1972506 - [RGW][LC]: Error message "lifecycle: ERROR: remove_expired_obj" received when running radosgw-admin lc process.
  • BZ - 1976128 - Unable to create the maximum number of LUNs (255) per target, and containers crash after some disks are added to the client
  • BZ - 1977888 - ceph-volume fails to get an inventory list
  • BZ - 1982962 - [5.0][4.2][Mixed Mode] : Metadata sync is stuck on the secondary site during an upgrade from 4.x to 5.0
  • BZ - 1988773 - [RFE] Provide warning when the 'require-osd-release' flag does not match current release.
  • BZ - 1996667 - [RGW]Bucket is listing deleted object after immediate restart of RGW service
  • BZ - 1997480 - [cephadm][Testathon] cephadm rm-cluster does not clean up leftover systemd units
  • BZ - 1997996 - [RFE] CephFS+NFS HA
  • BZ - 1999710 - [RFE] [Ceph-Dashboard] Implement BlueStore onode hit/miss counters into the dashboard
  • BZ - 2003925 - [GSS][RGW]no. of objects in a bucket at primary and secondary site are different
  • BZ - 2004171 - [RGW][Notification][kafka][MS]: awsRegion not populated with zonegroup in the event record
  • BZ - 2005960 - [RFE] MDS should note blocklist reason to client in session reject messages
  • BZ - 2006084 - [Workload-DFG] RGW pool creation code cleanup with pg autoscaler for mostly_omap use case
  • BZ - 2006214 - [cephadm] : traceback is dumped when unexpected input is provided for `format` argument
  • BZ - 2006771 - [RBD] - ISCSI - No error/warning displayed when incorrect parameters are specified in the yaml
  • BZ - 2008402 - [cee/sd] Need steps to clean up the ceph packages gracefully in order to install latest ceph-common packages on host.
  • BZ - 2009118 - [gss][rgw] RGW crashes frequently in the multi-site setup, which prevents data replication
  • BZ - 2013085 - [cephadm] - upgrading cephadm package using yum returns a warning - userdel: cephadm mail spool (/var/spool/mail/cephadm) not found
  • BZ - 2013215 - Issue deploying a Ceph cluster with cephadm in a disconnected environment
  • BZ - 2015597 - [Workload-DFG] bucket stats misreporting LC expired objects
  • BZ - 2016936 - [RFE] Support for staggered/control upgrade of ceph nodes by role, hosts/rack
  • BZ - 2017389 - [RGW] Empty bucket "NotificationConfiguration" to remove existing notifications from a bucket not working as expected
  • BZ - 2018906 - [GSS] Stray daemon tcmu-runner is reported not managed by cephadm
  • BZ - 2019909 - [rbd-mirror] mirror image status - syncing_percent locked at 4 for an image of 50G+ during the sync
  • BZ - 2020618 - [DR] RBD Mirror snapshot processing stop after a few Failover-Relocate operations
  • BZ - 2024301 - [Workload-DFG] ceph orch can not resolve ceph-mgr address
  • BZ - 2024720 - [Workload-DFG] ceph failed to probe daemons or devices
  • BZ - 2027599 - [RFE][rgw-multisite] Capture current timestamp in radosgw-admin sync status command
  • BZ - 2028036 - [RFE][Ceph Dashboard] Add more panels/boxes for displaying CPU related stats
  • BZ - 2028628 - cephadm-clients playbook keyring isn't copied with the same name as specified by the keyring parameter
  • BZ - 2028693 - RHCS 4.x to 5.x upgrade - Cephadm adopt should choose the correct value for autotune_memory_target_ratio
  • BZ - 2028879 - [RGW-MultiSite] Input/Output error seen during radosgw-admin realm pull when using self signed certificate and inside cephadm shell
  • BZ - 2029307 - A deleted subvolumegroup when listed using "ceph fs subvolumegroup ls <vol_name>" shows as "_deleting"
  • BZ - 2030154 - [cee/sd][cephadm][RFE]ceph-volume should support automatic LVM creation on multipath devices.
  • BZ - 2030540 - mds: opening connection to up:replay/up:creating daemon causes message drop
  • BZ - 2031173 - [cee/sd][ceph-volume]ceph-volume log reports the error "ceph_volume.exceptions.ConfigurationError: Unable to load expected Ceph config at: /etc/ceph/ceph.conf"
  • BZ - 2034060 - [GSS][rgw] upgrading from 4.2z2 async --> 4.2z4 breaks legacy swift implicit tenant user accounts
  • BZ - 2034309 - [cee/sd][ceph-volume]ceph-volume fails to zap the lvm device(advanced configuration)
  • BZ - 2035179 - [cee/sd][cephadm][Not possible to bootstrap a cluster using multiple public networks]
  • BZ - 2035331 - [Workload-DFG] osd going down due to less fs.aio-max-nr value.
  • BZ - 2037752 - [Cephadm-ansible]: need support to run cephadm-preflight.yml with development repo.
  • BZ - 2039669 - [cephadm] input validation: Prevent MDS services with numeric service ids.
  • BZ - 2039741 - [GSS][RFE][CephFS][Add metadata information to CephFS volumes / subvolumes]
  • BZ - 2039816 - [GSS][RFE][Add the possibility to configure the LogFormat for the RGW opslog]
  • BZ - 2041563 - recursive scrub does not trigger stray reintegration
  • BZ - 2041571 - mds: fails to reintegrate strays if destdn's directory is full (ENOSPC)
  • BZ - 2042320 - CephFS: Log Failure details if subvolume clone fails.
  • BZ - 2042417 - [RADOS Stretch cluster] PG's stuck in remapped+peering after deployment of stretch mode
  • BZ - 2042602 - [RHCS 5] Performing a `ceph orch restart mgr` results in endless restart loop
  • BZ - 2043366 - [RGW][Indexless] RadosGW daemon crashed when performing a list operation on bucket having indexless configuration
  • BZ - 2043602 - CephFS: Failed to create clones if the PVC is filled with smaller size files
  • BZ - 2047487 - [RADOS-MGR]: Patches to address scalability in mgr API are needed in Pacific
  • BZ - 2048681 - [RFE] Limit slow request details to cluster log
  • BZ - 2049272 - mgr/nfs: allow dynamic updates of CephFS NFS exports
  • BZ - 2050728 - CVE-2022-0670 ceph: user/tenant can obtain access (read/write) to any share
  • BZ - 2051640 - [RFE][ceph-ansible] : Usability : New playbooks to help users during Host OS upgrade
  • BZ - 2052936 - CephFS: mgr/volumes: the subvolume snapshot clone's uid/gid is incorrect
  • BZ - 2053468 - dashboard: restarting a failed daemon
  • BZ - 2053470 - dashboard: [day 2] complete RBD-mirroring dashboard support
  • BZ - 2053591 - [GSS] cephadm shell fails when running from a non-monitor node
  • BZ - 2053706 - [RFE] ceph snapshots datestamps lack a timezone field
  • BZ - 2053709 - dashboard: OSD creation revisited
  • BZ - 2054967 - [Ceph-Dashboard][RGW]:Performance details for daemon shows no data
  • BZ - 2055173 - [cee/sd][cephFS]Scheduled snapshots are not created after ceph-mgr restart
  • BZ - 2057307 - [RHCS 5.0z4][Alert manager shows bogus 'MTU mismatch' alerts] (it considers network cards that are 'down' as well as those that are 'up')
  • BZ - 2058038 - cephadm adopt fails to start OSDs when dmcrypt: true
  • BZ - 2058372 - RFE - Ceph RGW logging
  • BZ - 2058669 - [RFE] [RGW-MultiSite] conditional debugging improvements for multisite sync [Clone for 5.2]
  • BZ - 2060278 - ceph orch rm should validate the given service_name
  • BZ - 2061501 - [RFE] cephadm-ansible playbook for enabling rocksdb resharding and new format for pool stats
  • BZ - 2064171 - [CephFS-Snapshot] - Validate all arguments post a traceback during snapshot schedule
  • BZ - 2064627 - In the perf counter data for ceph mon instead of "prioritycache" key returns empty string "" as key
  • BZ - 2065443 - Ceph should use the updated container images for Prometheus
  • BZ - 2067987 - [CEE/SD][ceph-dashboard][RFE] Set the custom security banner on the Ceph Dashboard login page
  • BZ - 2068039 - [RHCS 5.0z4][Metadata sync failures and one bucket missing in the secondary site]
  • BZ - 2069720 - [DR] rbd_support: a schedule may get lost due to load vs add race
  • BZ - 2071458 - Objects from versioned buckets are getting re-appeared and also accessible (able to download) after its deletion through LC
  • BZ - 2073209 - [RFE] Enhance the blocklist commands to take a CIDR
  • BZ - 2073881 - [cee/sd][Cephadm] upgrading cephadm package always deletes the cephadm user and related ssh-keys
  • BZ - 2074105 - [RGW] Lifecycle is not cleaning up expired objects
  • BZ - 2076850 - Pushing image to internal registry fails sporadically/randomly with error ImageStream:Unknown
  • BZ - 2077827 - [Dashboard] "active+clean+snaptrim_wait " PGs showing as "Unknown" from dashboard
  • BZ - 2077843 - [cee/sd][Cephadm] 5.1 `ceph orch upgrade` adds the prefix "docker.io" to image in the disconnected environment
  • BZ - 2079089 - [RGW] Segmentation Fault in Object Deletion code path (RGWDeleteMultiObj)
  • BZ - 2080242 - [GSS][cephadm][`cephadm ls` is listing non-existing legacy services after a migration from ceph-ansible to cephadm]
  • BZ - 2080276 - [ceph-dashboard][ingress][ssl]: The ingress service with SSL has an entry for 'Private Key' which needs to be removed
  • BZ - 2081596 - [GSS][OCS 4.7.1][Clone operations stuck in 'pending']
  • BZ - 2081653 - dashboard: display upstream Ceph version in "about box" modal
  • BZ - 2081715 - [RBD-Mirror] images are reporting as "up+error" and description as "failed to unlink local peer from remote image" after failover operation
  • BZ - 2081929 - client: add option to disable collecting and sending metrics
  • BZ - 2083885 - ceph orch upgrade creates additional RGW pool
  • BZ - 2086419 - [All-Monitors-Crashed in 16.2.7/src/mon/PaxosService.cc: 193: FAILED ceph_assert(have_pending)]
  • BZ - 2086438 - More ceph-mgr daemons deployed than the requested number of MGRs when using the placement option.
  • BZ - 2087236 - [16.2.8]Regressions with holding the GIL while attempting to lock a mutex
  • BZ - 2087736 - [cephadm] unexpected status code 404: https://10.8.128.81:8443//api/prometheus_receiver
  • BZ - 2087986 - [GSS][RGW-multisite][radosgw-admin command crashing while configuring multisite sync policies]
  • BZ - 2088602 - [RHCS][GSS][OCS 4.9] pod rook-ceph-rgw client.rgw.ocs.storagecluster.cephobjectstore.a crashed - thread_name:radosgw
  • BZ - 2088654 - Support --yes-i-know flag in bootstrap for 5.2
  • BZ - 2090357 - [rfe] support sse-s3 as tech preview (for support of transparent encryption in ODF)
  • BZ - 2090421 - [RFE] Cephadm Raw OSD Support
  • BZ - 2090456 - Skip ceph-infra during ceph-ansible rolling_update
  • BZ - 2092089 - [Workload-DFG] Setting osd_memory_target with osd/host does not succeed
  • BZ - 2092508 - [iscsi] tcmu container crash when cluster is upgraded from 5.1Z1 to 5.2
  • BZ - 2092554 - Remove fail on unsupported ansible version for ansible 2.10
  • BZ - 2092834 - rbd-mirror: don't prune non-primary snapshot when restarting delta sync
  • BZ - 2092838 - rbd-mirror: primary snapshot in-use by replayer can be unlinked and removed
  • BZ - 2092905 - [ceph-ansible][upgrade] Upgrade is blocked for 5.2 RGW Multisite
  • BZ - 2093017 - Creating osds with advance lvm configuration fails
  • BZ - 2093022 - after cluster upgrade to 5.2, all OSDs on a given node are down with error "ERROR: osd init failed: (5) Input/output error" and the log says "stop: uuid AAA != super.uuid BBB"
  • BZ - 2093031 - [RHCS 5.2] dups.size logging + COT dups trim command + online dups trimming fix
  • BZ - 2093065 - [cee][cephfs] FAILED ceph_assert(dnl->get_inode() == in)
  • BZ - 2093788 - ceph mon crash reported while adding advance lvm scenario osds
  • BZ - 2094112 - cephadm repeatedly redeploying complex osds
  • BZ - 2094416 - [RFE] Provide Ansible resources for automating RHCEPH deployments using cephadm
  • BZ - 2096882 - ganesha-rados-grace on IBM Power (ppc64le) always only shows usage information
  • BZ - 2096959 - [Workload-DFG] radosgw-admin bucket stats command failed during and after upgrade
  • BZ - 2097487 - [Ceph Dashboard] WDC Multipath cluster issues
  • BZ - 2098105 - Grafana container build failed on upgrading the version to 8.3.5
  • BZ - 2099348 - alertmanager.yml is configured with wrong webhook_configs URLs
  • BZ - 2099374 - module ceph_orch_host fails when state is absent
  • BZ - 2099828 - Upgrade from 4.3 to 5.2 is failing
  • BZ - 2099992 - [RGW]: radosgw-admin bucket check --fix command is not removing the orphan multipart entries
  • BZ - 2100503 - [Build] : several packages (example rbd-nbd) missing from rhcs rhel-9 composes
  • BZ - 2100915 - [Workload-DFG] OSD logs are being spammed with INFO: rgw_bucket_complete_op: writing bucket header
  • BZ - 2100967 - [rfe] translate radosgw-admin 2002 to ENOENT (POSIX); was: radosgw-admin bucket stats returns Unknown error 2002
  • BZ - 2102227 - [rbd-mirror] crash on assert in ImageWatcher::schedule_request_lock()
  • BZ - 2102365 - Setting rgw_data_notify_interval_msec=0 does not disable async data notifications
  • BZ - 2103673 - [RFE] Restrict the `ceph_orch_host` module's `labels` parameter to a list
  • BZ - 2103686 - `ceph_orch_host` module fails to set node `admin` when `set_admin_label` set to `true`
  • BZ - 2104780 - Cephadm should give warning/banner/note if User tries to upgrade ceph cluster from RHCS 5.1 to RHCS 5.2
  • BZ - 2105454 - [rbd-mirror] bogus "incomplete local non-primary snapshot" replayer error
  • BZ - 2105881 - MDS crash observed on 2 OCP clusters configured in Regional-DR setup with workloads running for sometime
  • BZ - 2107441 - [ceph-dashboard] OSD onode hit ratio grafana panel shows N/A
  • BZ - 2108656 - mds: FAILED ceph_assert(dir->get_projected_version() == dir->get_version())
  • BZ - 2109151 - Revert multisite/reshard commit that landed in BZ#2096959
  • BZ - 2109703 - MDS standby-replay daemon removed by monitors constantly during some degraded scenarios
  • BZ - 2110913 - Upgrade from RHCS 4x to RHCS 5.2 should not raise a HEALTH ERR
  • BZ - 2112101 - Cephfs PVC creation fails on a FIPS enabled cluster with clusterwide encryption